
TOK@ISPrague: Group items tagged "machine learning"


Lawrence Hrubes

This Cat Sensed Death. What if Computers Could, Too? - The New York Times - 0 views

  • So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?”
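    A minimal sketch of that opacity, assuming nothing about the model in the article: a small scikit-learn neural network is trained on synthetic stand-in data, asked for a probability, and then "asked why". All it can offer is matrices of learned weights.

    # Illustrative sketch only; the data and model are invented for this example.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for patient records: 200 cases, 10 numeric features.
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X, y)

    # The model readily assigns a probability to a new case...
    print(model.predict_proba(X[:1]))       # e.g. [[0.93 0.07]]

    # ...but "asking why" only yields the shapes and values of weight matrices,
    # nothing resembling a human-readable explanation.
    print([w.shape for w in model.coefs_])  # [(10, 32), (32, 32), (32, 2)]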
Lawrence Hrubes

Can A.I. Be Taught to Explain Itself? - The New York Times - 1 views

  • As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it.
markfrankel18

Teaching robots right from wrong -- ScienceDaily - 0 views

  • In a project funded by the Office of Naval Research and coordinated under the Multidisciplinary University Research Initiative, scientists will explore the challenges of infusing autonomous robots with a sense for right, wrong, and the consequences of both. "Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree," says principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of the Human-Robot Interaction Laboratory (HRI Lab) at Tufts. "The question is whether machines -- or any other artificial system, for that matter -- can emulate and exercise these abilities."
    One scenario is a battlefield, he says. A robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it? If the machine stops, a new set of questions arises. The robot assesses the soldier's physical state and determines that unless it applies traction, internal bleeding in the soldier's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier pain, even if it's for the soldier's well-being?
Lawrence Hrubes

The Case for Banning Laptops in the Classroom : The New Yorker - 0 views

  • I banned laptops in the classroom after it became common practice to carry them to school. When I created my “electronic etiquette policy” (as I call it in my syllabus), I was acting on a gut feeling based on personal experience. I’d always figured that, for the kinds of computer-science and math classes that I generally teach, which can have a significant theoretical component, any advantage that might be gained by having a machine at the ready, or available for the primary goal of taking notes, was negligible at best. We still haven’t made it easy to type notation-laden sentences, so the potential benefits were low. Meanwhile, the temptation for distraction was high. I know that I have a hard time staying on task when the option to check out at any momentary lull is available; I assumed that this must be true for my students, as well.
    Over time, a wealth of studies on students’ use of computers in the classroom has accumulated to support this intuition. Among the most famous is a landmark Cornell University study from 2003 called “The Laptop and the Lecture,” wherein half of a class was allowed unfettered access to their computers during a lecture while the other half was asked to keep their laptops closed. The experiment showed that, regardless of the kind or duration of the computer use, the disconnected students performed better on a post-lecture quiz. The message of the study aligns pretty well with the evidence that multitasking degrades task performance across the board.
Lawrence Hrubes

The Great A.I. Awakening - The New York Times - 1 views

  • Translation, however, is an example of a field where this approach fails horribly, because words cannot be reduced to their dictionary definitions, and because languages tend to have as many exceptions as they have rules. More often than not, a system like this is liable to translate “minister of agriculture” as “priest of farming.” Still, for math and chess it worked great, and the proponents of symbolic A.I. took it for granted that no activities signaled “general intelligence” better than math and chess.
  • A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial “neural networks” that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility. This notion is not new — a version of it dates to the earliest stages of modern computing, in the 1940s — but for much of its history most computer scientists saw it as vaguely disreputable, even mystical. Since 2011, though, Google Brain has demonstrated that this approach to artificial intelligence could solve many problems that confounded decades of conventional efforts. Speech recognition didn’t work very well until Brain undertook an effort to revamp it; the application of machine learning made its performance on Google’s mobile platform, Android, almost as good as human transcription. The same was true of image recognition. Less than a year ago, Brain for the first time commenced with the gut renovation of an entire consumer product, and its momentous results were being celebrated tonight.
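    A toy illustration of the translation failure described in the first excerpt above (the phrase is from the article; the dictionary glosses and code are invented for this sketch): a symbolic, word-for-word lookup has no model of context, so an ambiguous word like "minister" picks up the wrong sense.

    # A toy word-for-word "translator" in the symbolic style: each source word
    # is replaced by a single dictionary gloss, with no model of context.
    # The glosses are invented for this sketch.
    GLOSS = {
        "minister": "priest",   # the religious sense, chosen blindly
        "of": "of",
        "agriculture": "farming",
    }

    def translate_word_by_word(phrase: str) -> str:
        """Replace each word with its one dictionary gloss; keep unknown words as-is."""
        return " ".join(GLOSS.get(word, word) for word in phrase.split())

    print(translate_word_by_word("minister of agriculture"))
    # -> "priest of farming": every word is "correct" in isolation,
    #    yet the phrase as a whole is mistranslated.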